Fast Algorithms for Online Stochastic Convex Programming
Authors
Abstract
We introduce the online stochastic Convex Programming (CP) problem, a very general version of online stochastic problems that allows arbitrary concave objectives and convex feasibility constraints. Many well-studied problems, such as online stochastic packing and covering and online stochastic matching with concave returns, are special cases of online stochastic CP. We present fast algorithms for these problems, which achieve near-optimal regret guarantees for both the i.i.d. and the random permutation models of stochastic inputs. When applied to the special case of online packing, our ideas yield a simpler and faster primal-dual algorithm for this well-studied problem, which achieves the optimal competitive ratio. Our techniques make explicit the connection between the primal-dual paradigm, online learning, and online stochastic CP.
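To give a flavor of the primal-dual connection the abstract alludes to, here is a minimal sketch (not the paper's exact algorithm) of an online packing rule in that spirit: dual prices over resources are maintained by an online-learning, multiplicative-weights update, and an arriving request is accepted when its value beats its dual-priced resource cost. The accept rule, the learning rate eps, and all names are illustrative assumptions.

```python
import numpy as np

def online_packing(requests, budgets, eps=0.1):
    """Hedged sketch: multiplicative-weight dual prices for online packing.

    requests -- iterable of (value, consumption_vector) pairs, arriving online
    budgets  -- per-resource capacities
    eps      -- dual learning rate (illustrative tuning parameter)
    """
    budgets = np.asarray(budgets, dtype=float)
    weights = np.ones(len(budgets))          # multiplicative weights per resource
    used = np.zeros(len(budgets))            # cumulative consumption
    total = 0.0
    for value, cons in requests:
        cons = np.asarray(cons, dtype=float)
        prices = weights / weights.sum()     # normalized dual prices
        # Accept iff the value beats the dual-priced cost and capacity remains.
        if value > prices @ cons and np.all(used + cons <= budgets):
            total += value
            used += cons
            # Raise the price of the resources this request consumed.
            weights *= np.exp(eps * cons / budgets)
    return total, used
```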
Similar Resources
Efficient algorithms for online convex optimization and their applications
In this thesis we study algorithms for online convex optimization and their relation to approximate optimization. In the first part, we propose a new algorithm for a general online optimization framework called online convex optimization. Whereas previous efficient algorithms are mostly gradient-descent based, the new algorithm is inspired by the Newton-Raphson method for convex optimization, a...
Fast Rate Analysis of Some Stochastic Optimization Algorithms
In this paper, we revisit three fundamental and popular stochastic optimization algorithms (namely, Online Proximal Gradient, the Regularized Dual Averaging method, and ADMM with online proximal gradient) and analyze their convergence speed under conditions weaker than those in the literature. In particular, previous works showed that these algorithms converge at a rate of O(ln T / T) when the loss functi...
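For concreteness, here is a minimal sketch of one of the three revisited methods, Online Proximal Gradient, specialized to an l1 regularizer, whose proximal map is soft-thresholding. The step-size schedule, the regularization weight, and all names are illustrative assumptions, not the settings analyzed in the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_proximal_gradient(samples, grad, dim, lam=0.01, eta=0.1):
    """Hedged sketch: gradient step on the loss, then the regularizer's prox."""
    x = np.zeros(dim)
    for i, z in enumerate(samples, start=1):
        step = eta / np.sqrt(i)               # illustrative step-size schedule
        x = soft_threshold(x - step * grad(x, z), step * lam)
    return x
```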
Projection-free Online Learning
The computational bottleneck in applying online learning to massive data sets is usually the projection step. We present efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps using the Frank-Wolfe technique. We obtain a range of regret bounds for online convex optimization, with better bounds for specific cases such as stochastic ...
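To make the projection-free idea concrete, here is a minimal sketch over the probability simplex, where each round replaces the projection with a single linear-optimization call (the best vertex against the accumulated gradients) followed by a feasibility-preserving convex combination, in the spirit of the Frank-Wolfe technique. The gradient callback, the 1/t step size, and all names are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def online_frank_wolfe(grad_fn, dim, T):
    """Hedged sketch: projection-free online learning on the simplex."""
    x = np.ones(dim) / dim                   # start at the simplex center
    grad_sum = np.zeros(dim)
    for t in range(1, T + 1):
        grad_sum += grad_fn(x, t)            # gradient of the round-t loss at x
        v = np.zeros(dim)
        v[np.argmin(grad_sum)] = 1.0         # linear oracle: best simplex vertex
        gamma = 1.0 / t                      # diminishing step size
        x = (1 - gamma) * x + gamma * v      # convex combination stays feasible
    return x
```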
A Linearly Convergent Conditional Gradient Algorithm with Applications to Online and Stochastic Optimization
Linear optimization is often algorithmically simpler than non-linear convex optimization. Linear optimization over matroid polytopes, matching polytopes, and path polytopes are examples of problems for which we have simple and efficient combinatorial algorithms, but whose non-linear convex counterparts are harder and admit significantly less efficient algorithms. This motivates the computation...
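To illustrate the gap this abstract points to, here is a minimal sketch of linear optimization over the simplest matroid polytope, the uniform matroid, where the greedy rule of taking the k heaviest elements is exactly optimal; no comparably simple one-shot rule exists for a general non-linear convex objective over the same polytope. The uniform-matroid choice and names are ours, for brevity.

```python
import numpy as np

def linear_opt_uniform_matroid(weights, k):
    """Hedged sketch: maximize a linear objective over the uniform-matroid
    polytope by greedily selecting the k largest-weight elements."""
    weights = np.asarray(weights, dtype=float)
    idx = np.argsort(weights)[::-1][:k]   # greedy: k heaviest elements
    x = np.zeros(len(weights))
    x[idx] = 1.0                          # indicator vector, a polytope vertex
    return x
```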
On the Generalization Ability of Online Strongly Convex Programming Algorithms
This paper examines the generalization properties of online convex programming algorithms when the loss function is Lipschitz and strongly convex. Our main result is a sharp bound, which holds with high probability, on the excess risk of the output of an online algorithm in terms of the average regret. This allows one to use recent algorithms with logarithmic cumulative regret guarantees to achi...
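A minimal sketch of the online-to-batch conversion behind such results: run an online (sub)gradient method over the i.i.d. sample stream and return the averaged iterate, whose excess risk is controlled by the average regret. The Euclidean-ball constraint, the projection, and the step-size schedule are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def online_to_batch(samples, grad, dim, eta=0.1, radius=1.0):
    """Hedged sketch: average the iterates of an online gradient method."""
    x = np.zeros(dim)
    avg = np.zeros(dim)
    for i, z in enumerate(samples, start=1):
        x = x - (eta / np.sqrt(i)) * grad(x, z)   # online gradient step
        norm = np.linalg.norm(x)
        if norm > radius:                          # project back onto the ball
            x *= radius / norm
        avg += (x - avg) / i                       # running average of iterates
    return avg
```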